
    Improved Bounds on Quantum Learning Algorithms

    In this article we give several new results on the complexity of algorithms that learn Boolean functions from quantum queries and quantum examples. Hunziker et al. conjectured that for any class C of Boolean functions, the number of quantum black-box queries required to exactly identify an unknown function from C is $O(\frac{\log |C|}{\sqrt{\hat{\gamma}^{C}}})$, where $\hat{\gamma}^{C}$ is a combinatorial parameter of the class C. We essentially resolve this conjecture in the affirmative by giving a quantum algorithm that, for any class C, identifies any unknown function from C using $O(\frac{\log |C| \log \log |C|}{\sqrt{\hat{\gamma}^{C}}})$ quantum black-box queries. We consider a range of natural problems intermediate between the exact learning problem (in which the learner must obtain all bits of information about the black-box function) and the usual problem of computing a predicate (in which the learner must obtain only one bit of information about the black-box function). We give positive and negative results on when the quantum and classical query complexities of these intermediate problems are polynomially related to each other. Finally, we improve the known lower bound on the number of quantum examples (as opposed to quantum black-box queries) required for $(\epsilon,\delta)$-PAC learning any concept class of Vapnik-Chervonenkis dimension d over the domain $\{0,1\}^n$ from $\Omega(\frac{d}{n})$ to $\Omega(\frac{1}{\epsilon}\log \frac{1}{\delta}+d+\frac{\sqrt{d}}{\epsilon})$. This new lower bound comes closer to matching known upper bounds for classical PAC learning. Comment: Minor corrections. 18 pages. To appear in Quantum Information Processing. Requires: algorithm.sty, algorithmic.sty to build.
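
    For orientation, the bounds above can be placed side by side; the last expression is the standard classical PAC sample-complexity upper bound (achieved by any consistent hypothesis finder), quoted here only for comparison and not a result of the paper.

\[
\text{conjectured: } O\!\left(\frac{\log|C|}{\sqrt{\hat{\gamma}^{C}}}\right)
\qquad\longrightarrow\qquad
\text{proved here: } O\!\left(\frac{\log|C|\,\log\log|C|}{\sqrt{\hat{\gamma}^{C}}}\right)
\]
\[
\text{quantum-example lower bound: } \Omega\!\left(\frac{d}{n}\right)
\;\longrightarrow\;
\Omega\!\left(\frac{1}{\epsilon}\log\frac{1}{\delta} + d + \frac{\sqrt{d}}{\epsilon}\right),
\qquad
\text{classical upper bound: } O\!\left(\frac{1}{\epsilon}\left(d\log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right)
\]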

    Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries

    Suppose that an $m$-simplex is partitioned into $n$ convex regions having disjoint interiors and distinct labels, and we may learn the label of any point by querying it. The learning objective is to know, for any point in the simplex, a label that occurs within some distance $\epsilon$ from that point. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $\mathrm{poly}(n, \log(\frac{1}{\epsilon}))$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $\mathrm{poly}(m, \log(\frac{1}{\epsilon}))$ queries. We show via Kakutani's fixed-point theorem that these algorithms provide bounds on the best-response query complexity of computing approximate well-supported equilibria of bimatrix games in which one of the players has a constant number of pure strategies. We also partially extend our results to games with multiple players, establishing further query complexity bounds for computing approximate well-supported equilibria in this setting. Comment: 38 pages, 7 figures, second version strengthens lower bound in Theorem 6, adds footnotes with additional comments and fixes typo
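
    A minimal sketch of the query model under strong simplifying assumptions: on the 1-simplex (an interval) with only two labels, "generalised binary search" degenerates to ordinary bisection and finds an epsilon-accurate boundary with O(log(1/epsilon)) label queries. The actual CD-GBS and CR-GBS algorithms handle an m-simplex with n labels and are substantially more involved; all names below are hypothetical.

# Illustrative sketch only: one-dimensional analogue of the label-query model.
# We locate the single boundary between two labels on [0, 1] to accuracy eps.
def locate_boundary(query_label, eps=1e-6):
    """query_label(x) returns the label of point x in [0, 1].
    Assumes label(0) != label(1) and a single boundary point."""
    left_label = query_label(0.0)
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if query_label(mid) == left_label:
            lo = mid          # boundary lies to the right of mid
        else:
            hi = mid          # boundary lies at or to the left of mid
    return (lo + hi) / 2.0    # eps-approximation of the boundary

if __name__ == "__main__":
    # Hypothetical partition of [0, 1] into labels "A" (below 0.37) and "B".
    print(locate_boundary(lambda x: "A" if x < 0.37 else "B"))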

    Harmonic Analysis of Boolean Networks: Determinative Power and Perturbations

    Consider a large Boolean network with a feed-forward structure. Given a probability distribution on the inputs, can one find, possibly small, collections of input nodes that determine the states of most other nodes in the network? To answer this question, a notion that quantifies the determinative power of an input over the states of the nodes in the network is needed. We argue that the mutual information (MI) between a given subset of the inputs X = {X_1, ..., X_n} of some node i and its associated function f_i(X) quantifies the determinative power of this set of inputs over node i. We compare the determinative power of a set of inputs to the sensitivity to perturbations of these inputs, and find that, perhaps surprisingly, an input that has large sensitivity to perturbations does not necessarily have large determinative power. However, for unate functions, which play an important role in genetic regulatory networks, we find a direct relation between MI and sensitivity to perturbations. As an application of our results, we analyze the large-scale regulatory network of Escherichia coli. We identify the most determinative nodes and show that a small subset of those reduces the overall uncertainty of the network state significantly. Furthermore, the network is found to be tolerant to perturbations of its inputs.
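
    As a concrete illustration of the quantity being argued for, the sketch below computes MI(f_i(X); X_S) for a small Boolean function by exhaustive enumeration under independent inputs. It is a toy rendering of the definition, not the paper's code, and the example function is hypothetical.

# Toy sketch: mutual information between a node's function value and a subset
# of its inputs, under independent Bernoulli(p_i) inputs, by full enumeration.
from itertools import product
from math import log2

def mutual_information(f, n, S, p):
    """f: tuple of n bits -> 0/1; S: indices of the input subset;
    p: list of P[X_i = 1]. Returns MI(f(X); X_S) in bits."""
    def prob(bits):
        pr = 1.0
        for i in range(n):
            pr *= p[i] if bits[i] == 1 else 1 - p[i]
        return pr

    joint = {}                       # (x_S, y) -> probability
    for x in product((0, 1), repeat=n):
        key = (tuple(x[i] for i in S), f(x))
        joint[key] = joint.get(key, 0.0) + prob(x)

    p_xs, p_y = {}, {}               # marginals of X_S and of f(X)
    for (xs, y), pr in joint.items():
        p_xs[xs] = p_xs.get(xs, 0.0) + pr
        p_y[y] = p_y.get(y, 0.0) + pr

    return sum(pr * log2(pr / (p_xs[xs] * p_y[y]))
               for (xs, y), pr in joint.items() if pr > 0)

if __name__ == "__main__":
    # Example: 3-input majority; how determinative is input 0 alone?
    maj = lambda x: int(sum(x) >= 2)
    print(mutual_information(maj, 3, [0], [0.5, 0.5, 0.5]))  # about 0.19 bits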

    Some Results on Average-Case Hardness Within the Polynomial Hierarchy

    We prove several results about the average-case complexity of problems in the Polynomial Hierarchy (PH). We give a connection among average-case, worst-case, and non-uniform complexity of optimization problems. Specifically, we show that if $P^{NP}$ is hard in the worst case then it is either hard on the average (in the sense of Levin) or it is non-uniformly hard (i.e. it does not have small circuits). Recently, Gutfreund, Shaltiel and Ta-Shma (IEEE Conference on Computational Complexity, 2005) showed an interesting worst-case to average-case connection for languages in NP, under a notion of average-case hardness defined using uniform adversaries. We show that extending their connection to hardness against quasi-polynomial time would imply that NEXP doesn't have polynomial-size circuits. Finally, we prove an unconditional average-case hardness result. We show that for each k, there is an explicit language in $P^{\Sigma_2}$ which is hard on average for circuits of size $n^k$.

    Detecting controlling nodes of boolean regulatory networks

    Boolean models of regulatory networks are assumed to be tolerant to perturbations. That qualitatively implies that each function can only depend on a few nodes. Biologically motivated constraints further show that functions found in Boolean regulatory networks belong to certain classes of functions, for example, the unate functions. It turns out that these classes have specific properties in the Fourier domain. That motivates us to study the problem of detecting controlling nodes in classes of Boolean networks using spectral techniques. We consider networks with unbalanced functions and functions of an average sensitivity less than $\frac{2}{3}k$, where k is the number of controlling variables for a function. Further, we consider the class of 1-low networks, which includes unate networks, linear threshold networks, and networks with nested canalyzing functions. We show that the application of spectral learning algorithms leads to both better time and sample complexity for the detection of controlling nodes compared with algorithms based on exhaustive search. For a particular algorithm, we state analytical upper bounds on the number of samples needed to find the controlling nodes of the Boolean functions. Further, improved algorithms for detecting controlling nodes in large-scale unate networks are given and numerically studied.
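
    The sketch below illustrates one spectral ingredient such approaches rely on, under simplifying assumptions (uniform samples, a single node's update function): estimate the degree-1 Fourier coefficients f_hat(i) = E[f(x)(-1)^{x_i}] and rank inputs by their magnitude. For unate functions the magnitude of f_hat(i) equals the influence of variable i, so these coefficients already point to the controlling inputs. This is not the paper's algorithm, and the example update rule is hypothetical.

# Sketch: estimate degree-1 Fourier coefficients of a node's update function
# from uniform random samples and rank candidate controlling variables.
import random

def estimate_degree1_coeffs(f, n, num_samples=20000, rng=random):
    sums = [0.0] * n
    for _ in range(num_samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = 1 - 2 * f(x)                    # map {0,1} output to {+1,-1}
        for i in range(n):
            sums[i] += y * (1 - 2 * x[i])   # chi_i(x) = (-1)^{x_i}
    return [s / num_samples for s in sums]

if __name__ == "__main__":
    # Hypothetical node update rule depending only on inputs 2 and 5 (OR).
    f = lambda x: x[2] | x[5]
    coeffs = estimate_degree1_coeffs(f, n=10)
    ranked = sorted(range(10), key=lambda i: -abs(coeffs[i]))
    print(ranked[:3])   # inputs 2 and 5 should come out on top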

    A Fourier Analysis Based Attack against Physically Unclonable Functions

    Electronic payment systems have leveraged the advantages offered by RFID technology, whose security is promised to be improved by applying the notion of Physically Unclonable Functions (PUFs). Along with the evolution of PUFs, numerous successful attacks against PUFs have been proposed in the literature. Among these, machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, have attracted great attention. Our paper pursues this line of research by introducing a Fourier-analysis-based attack against PUFs. More specifically, this paper focuses on two main aspects of ML attacks, namely being provable and noise tolerant. In this regard, we prove that our attack fits naturally into a provable Probably Approximately Correct (PAC) model. Moreover, we show that our attacks against known PUF families are effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. We believe that our new style of ML algorithms, which take advantage of the Fourier analysis principle, can offer better measures of PUF security.
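
    The following is a toy sketch of the general principle only (fit a low-degree Fourier approximation to noisy challenge-response pairs and predict with its sign); it is not the paper's PAC attack. The linear-threshold "PUF" used for the demonstration is merely a crude stand-in for an arbiter-PUF delay model, and all names and parameters are hypothetical.

# Toy sketch: learn degree-0/1 Fourier coefficients from noisy CRPs and
# predict responses by the sign of the resulting linear form.
import random

def degree1_learner(examples, n):
    """examples: list of (challenge bits in {0,1}^n, response in {0,1})."""
    m = len(examples)
    c0 = sum(1 - 2 * r for _, r in examples) / m
    c1 = [sum((1 - 2 * r) * (1 - 2 * x[i]) for x, r in examples) / m
          for i in range(n)]

    def predict(x):
        val = c0 + sum(c1[i] * (1 - 2 * x[i]) for i in range(n))
        return 0 if val >= 0 else 1
    return predict

if __name__ == "__main__":
    n, rng = 16, random.Random(1)
    w = [rng.gauss(0, 1) for _ in range(n)]          # hidden "delay" weights
    def puf(x, noise=0.05):                          # noisy threshold response
        clean = int(sum(wi * (1 - 2 * xi) for wi, xi in zip(w, x)) < 0)
        return clean ^ (rng.random() < noise)
    train = []
    for _ in range(5000):
        x = [rng.randint(0, 1) for _ in range(n)]
        train.append((x, puf(x)))                    # noisy training responses
    predictor = degree1_learner(train, n)
    test = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2000)]
    accuracy = sum(predictor(x) == puf(x, noise=0.0) for x in test) / len(test)
    print(f"accuracy on noise-free responses: {accuracy:.2f}")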

    On the Complexity of Compressing Obfuscation

    Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.

    On Learning Decision Trees with Large Output Domains

    For two disjoint sets of variables, X and Y, and a class of functions C, we define DT(X,Y,C) to be the class of all decision trees over X whose leaves are functions from C over Y. We study the learnability of DT(X,Y,C) using membership and equivalence queries. Boolean decision trees, DT(X,∅,{0,1}), were shown to be exactly learnable in [Bs93], but does this imply the learnability of decision trees that have non-Boolean leaves? A simple encoding of all possible leaf values will work provided that the size of C is reasonable. Our investigation involves several cases where simple encoding is not feasible, i.e., when |C| is large. We show how to learn decision trees whose leaves are learnable concepts belonging to a class C, DT(X,Y,C), when the separation between the variables X and Y is known. A simple algorithm for decision trees whose leaves are constants, DT(X,∅,C), is also presented. Each case above requires at least s separate executions of the algorithm from [Bs93], where s is the number of distinct leaves of the tree, but we show that if C is a bounded lattice, DT(X,∅,C) is learnable using only one execution of this algorithm.
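
    To make the object DT(X, Y, C) concrete, here is a minimal sketch of such a tree: internal nodes branch on X-variables and leaves apply a function from C to the Y-variables. This only illustrates evaluation of the hypothesis class, not the membership/equivalence-query learning algorithm, and the example tree is hypothetical.

# Sketch of a decision tree in DT(X, Y, C): X-variables at internal nodes,
# functions over Y at the leaves.
class Node:
    def __init__(self, x_index=None, low=None, high=None, leaf_fn=None):
        self.x_index, self.low, self.high, self.leaf_fn = x_index, low, high, leaf_fn

    def evaluate(self, x, y):
        if self.leaf_fn is not None:          # leaf: apply the C-function to y
            return self.leaf_fn(y)
        child = self.high if x[self.x_index] else self.low
        return child.evaluate(x, y)

if __name__ == "__main__":
    # Hypothetical tree with C = conjunctions over Y:
    #   if x0 = 0 -> y1 AND y2,  if x0 = 1 -> y0
    tree = Node(x_index=0,
                low=Node(leaf_fn=lambda y: y[1] & y[2]),
                high=Node(leaf_fn=lambda y: y[0]))
    print(tree.evaluate(x=[0], y=[1, 1, 0]))   # -> 0
    print(tree.evaluate(x=[1], y=[1, 1, 0]))   # -> 1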

    Learning DNF Over The Uniform Distribution Using A Quantum Example Oracle

    We generalize the notion of PAC learning from an example oracle to a notion of efficient learning on a quantum computer using a quantum example oracle. This quantum example oracle is a natural extension of the traditional PAC example oracle, and it immediately follows that all PAC-learnable function classes are learnable in the quantum model. Furthermore, we obtain positive quantum learning results for classes that are not known to be PAC learnable. Specifically, we show that DNF is efficiently learnable with respect to the uniform distribution by a quantum algorithm using a quantum example oracle. While it was already known that DNF is uniform-learnable using a membership oracle, we prove that a quantum example oracle with respect to the uniform distribution is less powerful than a membership oracle.
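
    Roughly speaking, a quantum example oracle under the uniform distribution lets the learner (after a Hadamard transform) sample a subset S with probability given by the squared Fourier coefficient of the target, which helps locate the heavy coefficients of a DNF. The sketch below only simulates that sampling distribution classically, in exponential time, for a tiny hypothetical DNF; it conveys the distribution involved, not the paper's algorithm or any quantum speedup.

# Toy, classical simulation of "Fourier sampling": draw S with probability
# proportional to f_hat(S)^2 (f taken in +/-1 form, uniform inputs).
from itertools import product
import random

def fourier_coefficients(f, n):
    """f maps a tuple of n bits to {0,1}; returns {S: f_hat(S)}."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = 0.0
        for x in product((0, 1), repeat=n):
            chi = (-1) ** sum(s * xi for s, xi in zip(S, x))
            total += (1 - 2 * f(x)) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

def fourier_sample(coeffs, rng=random):
    """Sample S with probability f_hat(S)^2 (sums to 1 by Parseval)."""
    subsets = list(coeffs)
    weights = [coeffs[S] ** 2 for S in subsets]
    return rng.choices(subsets, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Small hypothetical DNF: (x0 AND x1) OR x2; its heavy coefficients
    # reveal which variables matter.
    dnf = lambda x: (x[0] & x[1]) | x[2]
    coeffs = fourier_coefficients(dnf, 3)
    print(sorted(coeffs, key=lambda S: -coeffs[S] ** 2)[:3])
    print(fourier_sample(coeffs))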

    Minimax bounds for active learning

    This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for "boundary fragment" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.